Generalized polynomial approximations in Markovian decision processes

Authors
Abstract


Similar resources

Non-Deterministic Policies in Markovian Decision Processes

Markovian processes have long been used to model stochastic environments. Reinforcement learning has emerged as a framework to solve sequential planning and decision-making problems in such environments. In recent years, attempts have been made to apply reinforcement learning methods to construct decision support systems for action selection in Markovian environments. Although conventional meth...


Learning Without State-Estimation in Partially Observable Markovian Decision Processes

Reinforcement learning (RL) algorithms provide a sound theoretical basis for building learning control architectures for embedded agents. Unfortunately, all of the theory and much of the practice (see Barto et al. for an exception) of RL is limited to Markovian decision processes (MDPs). Many real-world decision tasks, however, are inherently non-Markovian, i.e., the state of the environment is only incomp...


Constrained Markovian decision processes: the dynamic programming approach

We consider semicontinuous controlled Markov models in discrete time with total expected losses. Only control strategies which meet a set of given constraint inequalities are admissible. One has to build an optimal admissible strategy. The main result consists in the constructive development of optimal strategy with the help of the dynamic programming method. The model studied covers the case o...
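As context for the dynamic-programming construction mentioned above, the sketch below shows only the unconstrained backbone: backward induction on a small finite-horizon model with total expected losses. The constraint inequalities treated in the paper are not modeled here, and the states, actions, horizon, and loss values are invented for illustration.

```python
import numpy as np

# Hypothetical toy model: 3 states, 2 actions, horizon 5 (all made up).
# P[a, s, s'] = transition probability, loss[s, a] = one-step expected loss.
n_states, n_actions, horizon = 3, 2, 5
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
loss = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# Backward induction: V[t, s] = min_a [ loss[s, a] + sum_s' P[a, s, s'] * V[t+1, s'] ]
V = np.zeros((horizon + 1, n_states))            # terminal losses are zero
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    # Q[s, a] = expected total loss from (t, s) when taking action a first
    Q = loss + np.einsum("asj,j->sa", P, V[t + 1])
    policy[t] = Q.argmin(axis=1)
    V[t] = Q.min(axis=1)

print("optimal expected total loss from each start state:", V[0])
```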


Reinforcement Learning Algorithms for Average-Payoff Markovian Decision Processes

Reinforcement learning (RL) has become a central paradigm for solving learning-control problems in robotics and artificial intelligence. RL researchers have focused almost exclusively on problems where the controller has to maximize the discounted sum of payoffs. However, as emphasized by Schwartz (1993), in many problems, e.g., those for which the optimal behavior is a limit cycle, it is mo...
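For context, the snippet below is a minimal sketch of an R-learning-style update in the spirit of Schwartz's average-payoff formulation; it is not taken from the listed paper, and the step sizes, table shapes, and greedy-action test are illustrative assumptions.

```python
import numpy as np

def r_learning_step(Q, rho, s, a, r, s_next, alpha=0.1, beta=0.01):
    """One R-learning-style update (average-payoff RL, after Schwartz 1993).

    Q[s, a] holds relative action values; rho is the running estimate of the
    average payoff per step. Step sizes alpha and beta are illustrative.
    """
    # TD error uses (r - rho) in place of a discounted target.
    td_error = r - rho + Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td_error
    # Update the average-payoff estimate only when the chosen action is greedy.
    if Q[s, a] == Q[s].max():
        rho += beta * (r + Q[s_next].max() - Q[s].max() - rho)
    return Q, rho

# Tiny usage example with made-up sizes and a single dummy transition.
Q = np.zeros((4, 2))      # 4 states, 2 actions (hypothetical)
rho = 0.0
Q, rho = r_learning_step(Q, rho, s=0, a=1, r=1.0, s_next=2)
```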



Journal

Journal title: Journal of Mathematical Analysis and Applications

Year: 1985

ISSN: 0022-247X

DOI: 10.1016/0022-247x(85)90317-8